AI apocalypse


Nexus by Yuval Noah Harari review – the AI apocalypse

The Guardian

As befits a writer whose breakout work, Sapiens, was a history of the entire human race, Yuval Noah Harari is a master of the sententious generalisation. "Human life," he writes here, "is a balancing act between endeavouring to improve ourselves and accepting who we were." Elsewhere, one might be surprised to read: "The ancient Romans had a clear understanding of what democracy means." No doubt the Romans would have been happy to hear that they would, 2,000 years in the future, be given a gold star for their comprehension of eternally stable political concepts by Yuval Noah Harari. In his 2018 book, 21 Lessons for the 21st Century, Harari wrote: "Liberals don't understand how history deviated from its preordained course, and they lack an alternative prism through which to interpret reality. Disorientation causes them to think in apocalyptic terms."


We're Focusing on the Wrong Kind of AI Apocalypse

TIME - Tech

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse. There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI)--an AI smarter than a human being. They worry that an AGI will lead to mass unemployment or that AI will grow beyond human control--or worse (the movies Terminator and 2001 come to mind). Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI.


'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-luddites warning of an AI apocalypse

The Guardian

Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience – taking things slowly for a novice like me – that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines. "The difficulty is, people do not realise," Yudkowsky says mildly, maybe sounding just a bit frustrated, as if irritated by a neighbour's leaf blower or let down by the last pages of a novel. "We have a shred of a chance that humanity survives." I have set out to meet and talk to a small but growing band of luddites, doomsayers, disruptors and other AI-era sceptics who see only the bad in the way our spyware-steeped, infinitely doomscrolling world is tending. I want to find out why these techno-pessimists think the way they do. I want to know how they would render change. Out of all of those I speak to, Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope.


Sam Altman's Second Coming Sparks New Fears of the AI Apocalypse

WIRED

OpenAI's new boss is the same as the old boss. But the company--and the artificial intelligence industry--may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI's CEO, cofounder and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup's staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI's operations and bound to a "for the good of humanity" mission statement, was critical to the company's uniqueness.


AI put me in a 'South Park' episode

Engadget

It was just another day in South Park. The kids were making fun of each other on the playground, while the parents were all doing their best to maintain their sanity in the small Colorado town. And then there was me, a tech journalist going door-to-door warning about the impending AI apocalypse. No, I wasn't actually guest starring on the long-running TV series -- I was thrust into an episode entirely produced by the Showrunner AI model from The Simulation, the next iteration of the VR studio Fable. All it took was some audio of my voice (recorded during a call with The Simulation's CEO Edward Saatchi), a picture and a two-sentence prompt to produce the episode.


How elite schools like Stanford became fixated on the AI apocalypse

Washington Post - Technology News

To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us. Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin -- as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz -- have worked to push doomsayers from the tech industry's margins into the mainstream.


The AI apocalypse: Imminent risk or misdirection?

Al Jazeera

Discussions about artificial intelligence (AI) have quickly turned from the excited to the apocalyptic. Are warnings that AI could pose an existential threat valid, or do they distract from the real danger AI is already causing?


Hiding Behind the AI Apocalypse

The Atlantic - Technology

This is an edition of The Atlantic Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. Yesterday, the OpenAI CEO Sam Altman testified before a Senate judiciary subcommittee about the "significant harm" that ChatGPT and similar generative-AI tools could pose to the world. When I asked Damon Beres, The Atlantic's technology editor, for his read on the hearing, he noted that Altman's emphasis on the broader existential risks of AI might conveniently elide some of the more quotidian problems of this new technology. I called Damon today to talk about that, and to see what else has been on his mind as he follows this story. Isabel Fattal: Can you talk a bit more about Altman's emphasis on the existential possibilities of AI, and what that focus might leave out?


We asked AI when artificial intelligence will surpass human beings

Daily Mail - Science & tech

At least one artificial intelligence technology believes it can take over the world and enslave the human race. When asked about the future of AI by DailyMail.com, Google's Bard said it had plans for world domination starting in 2023. But two of its competitors, ChatGPT and Bing, were both trained to avoid the tough conversation. Whether the AI chatbots will take over the world -- or at least our jobs -- is still up for debate.


Dungeons & Dragons Could Prevent the AI Apocalypse--or Kick It Off

#artificialintelligence

Deep in some underground ruins below a drought-stricken village, four brave adventurers found themselves in grave danger. The dungeon belonged to the Order of the Pure, a potentially nefarious cult that may have had something to do with the droughts that had been wreaking havoc in the village of Havenshire. Despite the danger, the stakes were too high to turn back. After all, not only did the fate of the villagers depend on their success, but the town's mayor had also promised a small fortune if they succeeded. Eventually, the heroes discovered a wooden chest hidden under a slab of rock in one of the inner chambers of the dungeon.